Black-box Importance Sampling

Authors

  • Qiang Liu
  • Jason D. Lee
Abstract

Importance sampling is widely used in machine learning and statistics, but its power is limited by the restriction of using simple proposals for which the importance weights can be tractably calculated. We address this problem by studying black-box importance sampling methods that calculate importance weights for samples generated from any unknown proposal or black-box mechanism. Our method allows us to use better and richer proposals to solve difficult problems, and (somewhat counter-intuitively) also has the additional benefit of improving the estimation accuracy beyond typical importance sampling. Both theoretical and empirical analyses are provided.
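
The abstract leaves the weighting mechanism implicit; as the appendix excerpt further down indicates, the weights are chosen to minimize a kernelized (Stein) discrepancy between the weighted sample and the target p. The following is a minimal sketch of that idea, assuming an RBF base kernel; the bandwidth, the SLSQP solver, and the toy target are illustrative choices rather than the authors' exact setup.

```python
import numpy as np
from scipy.optimize import minimize

def stein_kernel_matrix(X, score, h=1.0):
    """Steinalized RBF kernel matrix k_p(x_i, x_j) for target p.

    X:     (n, d) sample locations from the black-box proposal
    score: function returning grad_x log p(x) row-wise, shape (n, d)
    h:     RBF bandwidth (illustrative default; tuned in practice)
    """
    S = score(X)                                    # (n, d) score values
    diff = X[:, None, :] - X[None, :, :]            # (n, n, d) pairwise x_i - x_j
    sq = np.sum(diff ** 2, axis=-1)                 # (n, n) squared distances
    K = np.exp(-sq / (2 * h ** 2))                  # base RBF kernel
    d = X.shape[1]
    inner = (
        S @ S.T                                     # s(x_i)^T s(x_j)
        + np.einsum('id,ijd->ij', S, diff) / h**2   # s(x_i)^T grad_j k / k
        - np.einsum('jd,ijd->ij', S, diff) / h**2   # s(x_j)^T grad_i k / k
        + d / h**2 - sq / h**4                      # trace(grad_i grad_j k) / k
    )
    return K * inner

def black_box_weights(X, score, h=1.0):
    """Importance weights for arbitrary samples X: minimize the quadratic
    Stein discrepancy w^T K_p w over the probability simplex."""
    n = len(X)
    Kp = stein_kernel_matrix(X, score, h)
    res = minimize(
        lambda w: w @ Kp @ w,
        x0=np.full(n, 1.0 / n),
        jac=lambda w: 2 * Kp @ w,
        bounds=[(0.0, None)] * n,
        constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}],
        method='SLSQP',
    )
    return res.x

# Toy check: target p = N(0, 1); samples come from a mismatched "black box".
rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=2.0, size=(100, 1))  # unknown proposal
w = black_box_weights(X, score=lambda X: -X)       # score of N(0, 1)
print(np.sum(w * X[:, 0]))                         # weighted E_p[x], ~0
```

Because the weights depend only on the sample locations and the score function of the target, the proposal that generated the samples never needs to be evaluated, which is what makes the procedure black-box.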

Similar resources

Overdispersed Black-Box Variational Inference

We introduce overdispersed black-box variational inference, a method to reduce the variance of the Monte Carlo estimator of the gradient in black-box variational inference. Instead of taking samples from the variational distribution, we use importance sampling to take samples from an overdispersed distribution in the same exponential family as the variational approximation. Our approach is gene...
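
One way to read this excerpt in code (a hedged sketch, not the authors' estimator): for a Gaussian variational family, draw from a variance-inflated member of the same family and correct with importance weights. The dispersion factor tau and the plain score-function form are assumptions made for the illustration.

```python
import numpy as np

def overdispersed_score_grad(f, mu, sigma, tau=2.0, n=2000, rng=None):
    """Score-function gradient d/dmu E_{z~N(mu,sigma^2)}[f(z)], estimated with
    importance samples from the overdispersed proposal N(mu, (tau*sigma)^2).
    tau > 1 inflates the proposal's variance (illustrative choice)."""
    rng = np.random.default_rng(rng)
    z = rng.normal(mu, tau * sigma, size=n)                  # z ~ r (overdispersed)
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)   # log N(mu, sigma^2) + const
    log_r = -0.5 * ((z - mu) / (tau * sigma)) ** 2 - np.log(tau * sigma)
    w = np.exp(log_q - log_r)                                # importance weights q/r
    score = (z - mu) / sigma ** 2                            # d/dmu log q(z)
    return np.mean(w * f(z) * score)

# Example: d/dmu E_q[z^2] = 2*mu for q = N(mu, sigma^2), so this prints ~2.
print(overdispersed_score_grad(lambda z: z ** 2, mu=1.0, sigma=0.5, rng=0))
```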

Perturbative Black Box Variational Inference

Black box variational inference (BBVI) with reparameterization gradients triggered the exploration of divergence measures other than the Kullback-Leibler (KL) divergence, such as alpha divergences. In this paper, we view BBVI with generalized divergences as a form of estimating the marginal likelihood via biased importance sampling. The choice of divergence determines a bias-variance trade-off ...
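
The framing in this excerpt rests on a standard identity, stated here as background rather than quoted from the paper: the marginal likelihood has an unbiased importance-sampling estimate whose logarithm is biased downward by Jensen's inequality,

$Z = \mathbb{E}_{z \sim q}\!\left[\frac{p(x, z)}{q(z)}\right], \qquad \log Z \ge \mathbb{E}_{z \sim q}\!\left[\log \frac{p(x, z)}{q(z)}\right] = \mathrm{ELBO}(q),$

so the KL-based ELBO is the log of an unbiased estimator, and moving to alpha or other generalized divergences shifts how much of the error shows up as bias versus variance.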

Appendix for “Black-Box Importance Sampling”

which can be shown to be equivalent to $\mathrm{MMD}_{\mathcal{H}}(q, p)^2 = \mathbb{E}_{x, x' \sim p}[k(x, x')] - 2\,\mathbb{E}_{x \sim p;\, y \sim q}[k(x, y)] + \mathbb{E}_{y, y' \sim q}[k(y, y')]$. We show that the kernelized discrepancy is equivalent to $\mathrm{MMD}_{\mathcal{H}_p}(q, p)$, equipped with the $p$-Steinalized kernel $k_p(x, x')$. Proposition 1.1. Assume (3) is true; then $\mathbb{S}(q, p) = \mathrm{MMD}_{\mathcal{H}_p}(q, p)$. Proof. Simply note that $\mathbb{E}_{x' \sim p}[k_p(x, x')] = 0$ for any $x$; we have $\mathrm{MMD}_{\mathcal{H}_p}(q, p)^2 = \mathbb{E}_{x, x' \sim q}[k_p(x, x')]$ ...
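
The excerpt is truncated mid-proof. Completing the step it begins, using the standard kernelized Stein discrepancy argument (a reconstruction, not a quote from the appendix): since $\mathbb{E}_{x' \sim p}[k_p(x, x')] = 0$ for every $x$, the $p$-$p$ term and the cross term of the MMD vanish, leaving

$\mathrm{MMD}_{\mathcal{H}_p}(q, p)^2 = \mathbb{E}_{y, y' \sim q}[k_p(y, y')],$

which matches the definition of the kernelized discrepancy, giving $\mathbb{S}(q, p) = \mathrm{MMD}_{\mathcal{H}_p}(q, p)$.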

Sequential Design for Achieving Estimated Accuracy of Global Sensitivities

Global sensitivity analysis provides information on the relative importance of the input variables for simulator functions used in computer experiments. It is more conclusive than screening methods for determining if a variable is influential, especially if a variable’s influence is derived from its interactions with other variables. In this paper, we develop a method for providing global sensi...
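
The sequential design itself is not reproduced in this excerpt; for context, a plain (non-sequential) Monte Carlo estimate of the first-order Sobol indices it targets can be sketched as follows. The pick-freeze estimator and the unit-cube input domain are background assumptions, not details from the paper.

```python
import numpy as np

def first_order_sobol(f, d, n=20000, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices
    S_i = Var(E[Y | X_i]) / Var(Y) for f on [0, 1]^d.
    Cov(f(A), f(AB_i)) estimates Var(E[Y | X_i]) because the two
    evaluations share only coordinate i."""
    rng = np.random.default_rng(rng)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA = f(A)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # freeze coordinate i, redraw the rest
        yABi = f(ABi)
        S[i] = (np.mean(yA * yABi) - yA.mean() * yABi.mean()) / yA.var()
    return S

# Additive test function: component variances 1/12 and 4/12, so S ~ [0.2, 0.8, 0.0].
print(first_order_sobol(lambda X: X[:, 0] + 2 * X[:, 1], d=3, rng=0))
```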

Markov Chain Truncation for Doubly-Intractable Inference

Computing partition functions, the normalizing constants of probability distributions, is often hard. Variants of importance sampling give unbiased estimates of a normalizer Z, however, unbiased estimates of the reciprocal 1/Z are harder to obtain. Unbiased estimates of 1/Z allow Markov chain Monte Carlo sampling of “doubly-intractable” distributions, such as the parameter posterior for Markov ...
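
One known route to the unbiased reciprocal estimates this excerpt mentions is the geometric-series/Russian-roulette construction: write $1/Z = \frac{1}{c} \sum_{k \ge 0} (1 - Z/c)^k$, truncate the series at a random point with survival reweighting, and plug an independent unbiased estimate of $Z$ into each factor. This is offered as background related in spirit to, but not the same as, the paper's Markov chain truncation scheme; the choices of c and stop_prob below are illustrative.

```python
import numpy as np

def unbiased_reciprocal(z_hat, c, stop_prob=0.5, rng=None):
    """Russian-roulette estimate of 1/Z from a stream of i.i.d. unbiased
    estimates z_hat() of Z, via 1/Z = (1/c) * sum_k (1 - Z/c)^k.
    Needs roughly |1 - Z/c| < 1 for the series to converge."""
    rng = np.random.default_rng(rng)
    total, prod, survival = 0.0, 1.0, 1.0
    while True:
        total += prod / survival          # reweight by P(series survived this far)
        if rng.random() < stop_prob:      # roulette: truncate here
            break
        survival *= 1.0 - stop_prob
        prod *= 1.0 - z_hat() / c         # fresh unbiased Z estimate per factor
    return total / c

# Toy check: Z = 4, estimated with noise; average many roulette runs.
rng = np.random.default_rng(0)
z_hat = lambda: 4.0 + rng.normal(0.0, 0.1)
est = np.mean([unbiased_reciprocal(z_hat, c=5.0, rng=k) for k in range(20000)])
print(est, 1 / 4.0)                       # both ~0.25
```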

Publication date: 2017